Results 1 - 8 of 8
1.
Behav Res Methods; 54(1): 233-251, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34145547

ABSTRACT

When people seek to understand concepts from an incomplete set of examples and counterexamples, there is usually an exponentially large number of classification rules that can correctly classify the observed data, depending on which features of the examples are used to construct these rules. A mechanistic approximation of human concept-learning should help to explain how humans prefer some rules over others when there are many that can be used to correctly classify the observed data. Here, we exploit the tools of propositional logic to develop an experimental framework that controls the minimal rules that are simultaneously consistent with the presented examples. For example, our framework allows us to present participants with concepts consistent with a disjunction and also with a conjunction, depending on which features are used to build the rule. Similarly, it allows us to present concepts that are simultaneously consistent with two or more rules of different complexity and using different features. Importantly, our framework fully controls which minimal rules compete to explain the examples and is able to recover the features used by the participant to build the classification rule, without relying on supplementary attention-tracking mechanisms (e.g. eye-tracking). We exploit our framework in an experiment with a sequence of such competitive trials, illustrating the emergence of various transfer effects that bias participants' prior attention to specific sets of features during learning.
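The rule competition the framework controls can be illustrated with a small brute-force sketch (the feature names and the search strategy below are hypothetical, not the authors' implementation): given labelled examples over binary features, enumerate candidate rules from shortest to longest and keep the minimal ones consistent with all examples.

```python
from itertools import combinations

# Toy examples: each object has binary features; the label says whether
# it belongs to the concept. Feature names are illustrative only.
examples = [
    ({"red": 1, "circle": 1, "big": 0}, True),
    ({"red": 1, "circle": 0, "big": 1}, True),
    ({"red": 0, "circle": 0, "big": 0}, False),
]

FEATURES = ["red", "circle", "big"]

def literals():
    # A literal is (feature, polarity); polarity True means "feature present".
    return [(f, p) for f in FEATURES for p in (True, False)]

def holds(lit, obj):
    f, p = lit
    return (obj[f] == 1) == p

def consistent(rule, kind):
    # kind: "lit", "and", "or" — test the rule against every labelled example.
    for obj, label in examples:
        if kind == "lit":
            pred = holds(rule, obj)
        elif kind == "and":
            pred = all(holds(l, obj) for l in rule)
        else:
            pred = any(holds(l, obj) for l in rule)
        if pred != label:
            return False
    return True

# Collect consistent rules, shortest first: single literals, then
# two-literal conjunctions and disjunctions if no literal suffices.
minimal = [("lit", l) for l in literals() if consistent(l, "lit")]
if not minimal:
    for kind in ("and", "or"):
        minimal += [(kind, pair) for pair in combinations(literals(), 2)
                    if consistent(pair, kind)]

print(minimal)
```

With these toy examples a single literal ("red" present) already classifies everything correctly, so no longer rule competes; the experimental framework described above constructs example sets where several minimal rules of different forms survive this filter simultaneously.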


Subjects
Concept Formation; Logic; Bias; Humans; Learning
2.
PLoS Comput Biol; 17(1): e1008598, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33465081

ABSTRACT

Working memory capacity can be improved by recoding the memorized information in a condensed form. Here, we tested the theory that human adults encode binary sequences of stimuli in memory using an abstract internal language and a recursive compression algorithm. The theory predicts that the psychological complexity of a given sequence should be proportional to the length of its shortest description in the proposed language, which can capture any nested pattern of repetitions and alternations using a limited number of instructions. Five experiments examine the capacity of the theory to predict human adults' memory for a variety of auditory and visual sequences. We probed memory using a sequence violation paradigm in which participants attempted to detect occasional violations in an otherwise fixed sequence. Both subjective complexity ratings and objective violation detection performance were well predicted by our theoretical measure of complexity, which simply reflects a weighted sum of the number of elementary instructions and digits in the shortest formula that captures the sequence in our language. While a simpler transition probability model, when tested as a single predictor in the statistical analyses, accounted for significant variance in the data, the goodness-of-fit with the data significantly improved when the language-based complexity measure was included in the statistical model, while the variance explained by the transition probability model largely decreased. Model comparison also showed that shortest description length in a recursive language provides a better fit than six alternative previously proposed models of sequence encoding. The data support the hypothesis that, beyond the extraction of statistical knowledge, human sequence coding relies on an internal compression using language-like nested structures.
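As a drastically simplified stand-in for the paper's recursive language, the sketch below charges a weighted cost per instruction and per digit, using only run-length repetitions (the real language also captures alternations and arbitrary nesting):

```python
def description_length(seq, w_instr=1.0, w_digit=1.0):
    """Toy complexity measure: run-length encode the sequence and charge
    w_instr per 'repeat' instruction plus w_digit per digit in each
    repetition count. A simplified stand-in for the paper's language."""
    runs = []
    i = 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1
        runs.append(j - i)   # length of this run of identical items
        i = j
    return sum(w_instr + w_digit * len(str(n)) for n in runs)

# A highly regular sequence should come out simpler than an irregular one.
print(description_length("AAAAAAAABBBBBBBB"))  # two long runs
print(description_length("AABABBABAABBBABA"))  # many short runs
```

Under the theory, sequences with a lower weighted description length should be rated as subjectively simpler and yield better violation detection, which is the pattern the five experiments report.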


Subjects
Memory, Short-Term/physiology; Models, Psychological; Adult; Algorithms; Computational Biology; Data Compression; Humans; Language; Models, Statistical
3.
Phys Rev E; 101(4-1): 042128, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32422757

ABSTRACT

Recent approaches to human concept learning have successfully combined the power of symbolic, infinitely productive rule systems with statistical learning to explain our ability to learn new concepts from just a few examples. The aim of most of these studies is to reveal the underlying language structuring these representations and providing a general substrate for thought. However, a model of thought that is fixed once trained is at odds with the extensive literature showing how experience shapes concept learning. Here, we ask about the plasticity of these symbolic descriptive languages. We perform a concept-learning experiment demonstrating that humans can rapidly change the repertoire of symbols they use to identify concepts, by compiling frequently used expressions into new symbols of the language. The pattern of concept-learning times is accurately described by a Bayesian agent that rationally updates the probability of compiling a new expression according to how useful it has been for compressing concepts so far. By portraying the language of thought as a flexible system of rules, we also highlight the difficulty of pinning it down empirically.
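The rational-update idea can be sketched with a Beta-Bernoulli posterior (the counts below are made up for illustration): the agent tracks how often a sub-expression has helped compress past concepts and raises the probability of compiling it into a new symbol accordingly.

```python
# Minimal sketch of the rational update, with hypothetical numbers:
# the agent counts how often a candidate sub-expression was useful
# for compressing previously learned concepts.

def compile_probability(useful, total, prior_a=1.0, prior_b=1.0):
    # Posterior mean of a Beta(prior_a, prior_b) prior after observing
    # `useful` successes out of `total` uses of the expression.
    return (prior_a + useful) / (prior_a + prior_b + total)

p0 = compile_probability(0, 0)    # before any evidence: the prior mean
p1 = compile_probability(8, 10)   # frequently useful: probability rises
print(p0, p1)
```

An agent of this kind predicts faster learning times for concepts expressible with already-compiled symbols, which is the qualitative pattern the abstract describes.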

4.
Neuroimage; 186: 245-255, 2019 Feb 1.
Article in English | MEDLINE | ID: mdl-30449729

ABSTRACT

Memory for spatial sequences does not depend solely on the number of locations to be stored, but also on the presence of spatial regularities. Here, we show that the human brain quickly stores spatial sequences by detecting geometrical regularities at multiple time scales and encoding them in a format akin to a programming language. We measured gaze-anticipation behavior while spatial sequences of variable regularity were repeated. Participants' behavior suggested that they quickly discovered the most compact description of each sequence in a language comprising nested rules, and used these rules to compress the sequence in memory and predict the next items. Activity in dorsal inferior prefrontal cortex correlated with the amount of compression, while right dorsolateral prefrontal cortex encoded the presence of embedded structures. Sequence learning was accompanied by a progressive differentiation of multi-voxel activity patterns in these regions. We propose that humans are endowed with a simple "language of geometry" which recruits a dorsal prefrontal circuit for geometrical rules, distinct from but close to areas involved in natural language processing.


Subjects
Language; Pattern Recognition, Visual/physiology; Prefrontal Cortex/physiology; Problem Solving/physiology; Space Perception/physiology; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Memory; Psychomotor Performance; Saccades; Young Adult
5.
PLoS One; 13(7): e0200420, 2018.
Article in English | MEDLINE | ID: mdl-29990351

ABSTRACT

Probabilistic proposals of Languages of Thought (LoTs) can explain learning across different domains as statistical inference over a compositionally structured hypothesis space. While frameworks may differ on how a LoT is implemented computationally, they all share the property of being built from a set of atomic symbols and rules by which these symbols can be combined. In this work we propose an extra validation step for the set of atomic productions defined by the experimenter. It starts by expanding the LoT grammar defined for the cognitive domain with a broader set of arbitrary productions, and then uses Bayesian inference to prune the productions on the basis of the experimental data. The result allows the researcher to check that the pruned grammar still matches the intuitive grammar chosen for the domain. We then test this method on the language of geometry, a specific LoT model for geometrical sequence learning. Finally, despite the geometrical LoT not being a universal (i.e., Turing-complete) language, we show an empirical relation between a sequence's probability and its complexity, consistent with the theoretical relationship for universal languages described by Levin's Coding Theorem.
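The validation step can be sketched as follows, with hypothetical production names and made-up usage counts (not the paper's data or model): expand the grammar with arbitrary extra productions, score every production with a Dirichlet-multinomial posterior over how often it is used to explain the data, and prune those falling below a threshold.

```python
# Hypothetical production names and made-up counts of how often each
# production appears in the best parses of the experimental data.
core = {"rotate+1": 40, "rotate-1": 35, "symmetry": 25}   # intuitive grammar
extra = {"arbitrary-jump": 1, "skip-two": 2}              # added for validation
counts = {**core, **extra}

# Posterior mean weight of each production under a symmetric
# Dirichlet(alpha) prior over production probabilities.
alpha = 1.0
total = sum(counts.values()) + alpha * len(counts)
posterior = {p: (c + alpha) / total for p, c in counts.items()}

# Prune productions whose posterior weight is negligible; the grammar
# that survives should match the intuitive one chosen for the domain.
threshold = 0.05
pruned = {p: w for p, w in posterior.items() if w >= threshold}
print(sorted(pruned))
```

If an arbitrarily added production survived pruning, that would signal the experimenter's intuitive grammar was missing a production the data actually demand.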


Subjects
Linguistics; Models, Theoretical; Probability Learning; Thinking; Bayes Theorem; Cognition; Humans
6.
Phys Rev Lett; 118(13): 130401, 2017 Mar 31.
Article in English | MEDLINE | ID: mdl-28409963

ABSTRACT

Quantum mechanics postulates random outcomes. However, a model making the same output predictions but in a deterministic manner would be, in principle, experimentally indistinguishable from quantum theory. In this work we consider such models in the context of nonlocality in a device-independent scenario. That is, we study pairs of nonlocal boxes that produce their outputs deterministically. It is known that, for these boxes to be nonlocal, at least one of the boxes' outputs has to depend on the other party's input via some kind of hidden signaling. We prove that, if the deterministic mechanism is also algorithmic, there is a protocol that, with the sole knowledge of any upper bound on the time complexity of such an algorithm, extracts that hidden signaling and uses it to communicate information.

7.
PLoS Comput Biol; 13(1): e1005273, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28125595

ABSTRACT

During language processing, humans form complex embedded representations from sequential inputs. Here, we ask whether a "geometrical language" with recursive embedding also underlies the human ability to encode sequences of spatial locations. We introduce a novel paradigm in which subjects are exposed to a sequence of spatial locations on an octagon, and are asked to predict future locations. The sequences vary in complexity according to a well-defined language comprising elementary primitives and recursive rules. A detailed analysis of error patterns indicates that primitives of symmetry and rotation are spontaneously detected and used by adults, preschoolers, and adult members of an indigenous group in the Amazon, the Munduruku, who have a restricted numerical and geometrical lexicon and limited access to schooling. Furthermore, subjects readily combine these geometrical primitives into hierarchically organized expressions. By evaluating a large set of such combinations, we obtained a first view of the language needed to account for the representation of visuospatial sequences in humans, and conclude that they encode visuospatial sequences by minimizing the complexity of the structured expressions that capture them.
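The geometrical primitives reported in the abstract (rotations and symmetries on the octagon) reduce to modular arithmetic on vertex indices 0-7; the functions below are an illustrative reconstruction, not the authors' exact language:

```python
# Vertices of the octagon are numbered 0-7 around the circle.

def rotate(pos, step=1):
    # Move `step` vertices around the octagon (negative = other direction).
    return (pos + step) % 8

def axial_symmetry(pos, axis=0):
    # Reflect across the axis passing through vertex `axis`.
    return (2 * axis - pos) % 8

def point_symmetry(pos):
    # Map each vertex to the diametrically opposite one.
    return (pos + 4) % 8

# A simple nested expression: "repeat a +2 rotation three times".
seq = [0]
for _ in range(3):
    seq.append(rotate(seq[-1], 2))
print(seq)                 # the sequence generated by the repeated rotation
print(point_symmetry(1))   # the vertex opposite vertex 1
```

Composing and repeating such primitives yields the hierarchically organized expressions whose minimal complexity, on this account, predicts how hard each sequence is to anticipate.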


Subjects
Comprehension/physiology; Concept Formation/physiology; Language; Mathematical Concepts; Models, Educational; Terminology as Topic; Adult; Algorithms; Child, Preschool; Culture; Humans; Indians, South American; Male; Mathematics/education
8.
Phys Rev Lett; 116(23): 230402, 2016 Jun 10.
Article in English | MEDLINE | ID: mdl-27341214

ABSTRACT

Many experimental setups in quantum physics use pseudorandomness in places where the theory requires randomness. In this Letter we show that the use of pseudorandomness instead of proper randomness in quantum setups has potentially observable consequences. First, we present a new loophole for Bell-like experiments: if some of the parties choose their measurements pseudorandomly, then the computational resources of the local model have to be limited in order to have a proper observation of nonlocality. Second, we show that no amount of pseudorandomness is enough to produce a mixed state by computably choosing pure states from some basis.
